Neural oscillations in the temporal pole for a temporally congruent audio-visual speech detection task

Authors

  • Takefumi Ohki
  • Atsuko Gunji
  • Yuichi Takei
  • Hidetoshi Takahashi
  • Yuu Kaneko
  • Yosuke Kita
  • Naruhito Hironaga
  • Shozo Tobimatsu
  • Yoko Kamio
  • Takashi Hanakawa
  • Masumi Inagaki
  • Kazuo Hiraki
Abstract

Although recent studies have elucidated the earliest mechanisms of multisensory integration, our understanding of how higher-level association cortices implement multisensory integration of more sustained and complex stimuli remains limited. In this study, we used magnetoencephalography (MEG) to determine how neural oscillations alter local and global connectivity during multisensory integration. We acquired MEG data from 15 healthy volunteers performing an audio-visual speech matching task. We selected regions of interest (ROIs) using whole-brain time-frequency analyses (power spectral density and wavelet transform), then applied phase-amplitude coupling (PAC) and imaginary coherence analyses to them. We identified prominent delta-band power in the temporal pole (TP), and a pronounced PAC between delta-band phase and beta-band amplitude. Furthermore, imaginary coherence analysis demonstrated that the temporal pole and well-known multisensory areas (e.g., posterior parietal and post-central cortices) are coordinated through delta-phase coherence. Our results therefore suggest that modulation of connectivity within the local network, and between the local and global networks, is important for audio-visual speech integration. In short, these oscillatory mechanisms within and between higher-level association cortices provide new insight into the brain mechanisms underlying audio-visual integration.
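
The two connectivity measures named in the abstract can be made concrete with a short sketch. The abstract does not specify which PAC estimator or spectral settings the authors used, so the mean-vector-length PAC measure (Canolty et al., 2006), the Butterworth filters, the band edges, and the segment lengths below are all illustrative assumptions, not the authors' actual pipeline:

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert, csd, welch

def bandpass(x, lo, hi, fs, order=4):
    # Zero-phase Butterworth band-pass filter (assumed design).
    b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, x)

def pac_mvl(x, fs, phase_band=(1.0, 4.0), amp_band=(13.0, 30.0)):
    # Mean-vector-length PAC: how strongly the delta-band phase
    # modulates the beta-band amplitude envelope of the same signal.
    phase = np.angle(hilbert(bandpass(x, *phase_band, fs)))
    amp = np.abs(hilbert(bandpass(x, *amp_band, fs)))
    return np.abs(np.mean(amp * np.exp(1j * phase)))

def imaginary_coherence(x, y, fs, nperseg=1024):
    # Imaginary part of coherency (Nolte et al., 2004): zero-lag,
    # volume-conducted coupling has no imaginary component, so it is
    # suppressed by construction.
    f, sxy = csd(x, y, fs=fs, nperseg=nperseg)
    _, sxx = welch(x, fs=fs, nperseg=nperseg)
    _, syy = welch(y, fs=fs, nperseg=nperseg)
    return f, np.imag(sxy / np.sqrt(sxx * syy))
```

In a pipeline like the one described, pac_mvl would be applied to a temporal-pole source time course, and imaginary_coherence to TP-parietal source pairs, inspecting the delta band (~1-4 Hz) for peaks.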

Similar resources

Cross-modal binding and activated attentional networks during audio-visual speech integration: a functional MRI study.

We evaluated the neural substrates of cross-modal binding and divided attention during audio-visual speech integration using functional magnetic resonance imaging. The subjects (n = 17) were exposed to phonemically concordant or discordant auditory and visual speech stimuli. Three different matching tasks were performed: auditory-auditory (AA), visual-visual (VV) and auditory-visual (AV). Subje...

Look Who's Talking: Speaker Detection using Video and Audio Correlation

The visual motion of the mouth and the corresponding audio data generated when a person speaks are highly correlated. This fact has been exploited for lip/speechreading and for improving speech recognition. We describe a method of automatically detecting a talking person (both spatially and temporally) using video and audio data from a single microphone. The audio-visual correlation is learned ...
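
The correlation idea in this excerpt can be sketched in a few lines. The cited paper learns the audio-visual correlation statistically; the fixed windowed Pearson correlation between mouth-region motion energy and frame-aligned audio energy below is a simplified stand-in, and every feature and parameter is an assumption for illustration:

```python
import numpy as np

def frame_audio_energy(audio, sr, fps):
    # RMS audio energy per video frame.
    hop = int(round(sr / fps))
    n = len(audio) // hop
    return np.array([np.sqrt(np.mean(audio[i * hop:(i + 1) * hop] ** 2))
                     for i in range(n)])

def motion_energy(frames):
    # Mean absolute inter-frame difference inside a candidate (mouth)
    # region; one value per frame transition.
    diffs = np.abs(np.diff(frames.astype(np.float64), axis=0))
    return diffs.reshape(len(diffs), -1).mean(axis=1)

def windowed_correlation(v, a, win=25):
    # Sliding-window Pearson correlation; sustained high values flag
    # frames where the region's motion tracks the audio, i.e. a talker.
    n = min(len(v), len(a)) - win + 1
    return np.array([np.corrcoef(v[i:i + win], a[i:i + win])[0, 1]
                     for i in range(n)])
```

Scanning candidate regions with this score localizes the talker spatially, and thresholding the score over time localizes speech intervals.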

Congruent Visual Speech Enhances Cortical Entrainment to Continuous Auditory Speech in Noise-Free Conditions.

Congruent audiovisual speech enhances our ability to comprehend a speaker, even in noise-free conditions. When incongruent auditory and visual information is presented concurrently, it can hinder a listener's perception and even cause him or her to perceive information that was not presented in either modality. Efforts to investigate the neural basis of these effects have often focus...
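
Entrainment of this kind is often quantified as coherence between the speech amplitude envelope and the neural signal. The excerpt does not say which measure the study used, so the sketch below (broadband Hilbert envelope plus magnitude-squared coherence) is one conventional choice, with all parameters assumed:

```python
import numpy as np
from scipy.signal import hilbert, decimate, coherence

def speech_envelope(audio, audio_sr, neural_sr):
    # Broadband amplitude envelope, downsampled to the neural sampling
    # rate (assumes audio_sr is an integer multiple of neural_sr).
    env = np.abs(hilbert(audio))
    return decimate(env, int(round(audio_sr / neural_sr)), ftype="fir")

def envelope_coherence(neural, env, fs, nperseg=512):
    # Magnitude-squared coherence; speech entrainment typically shows up
    # in the delta/theta range, where the syllable rhythm lives.
    n = min(len(neural), len(env))
    return coherence(neural[:n], env[:n], fs=fs, nperseg=nperseg)
```

Comparing the low-frequency coherence across audio-only and congruent audio-visual conditions would expose the enhancement the title describes.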

Temporal window of integration in auditory-visual speech perception.

Forty-three normal-hearing participants were tested in two experiments, which focused on temporal coincidence in auditory-visual (AV) speech perception. In these experiments, audio recordings of /pa/ and /ba/ were dubbed onto video recordings of /ka/ or /ga/, respectively (ApVk, AbVg), to produce the illusory "fusion" percepts /ta/ or /da/ [McGurk, H., & MacDonald, J. (1976). Hearing lips and seeing ...

Phoneme Classification Using Temporal Tracking of Speech Clusters in Spectro-temporal Domain

This article presents a new feature extraction technique based on the temporal tracking of clusters in the spectro-temporal feature space. In the proposed method, auditory cortical outputs were clustered, and the attributes of the speech clusters were extracted as secondary features. However, the shape and position of the speech clusters change over time. The clusters were therefore temporally tracked, and temporal tra...
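
A loose sketch of the tracking idea, as far as this truncated excerpt allows: cluster short-time spectral frames within each window, then link each window's centroids to the previous window's. The clustering method, k, window length, and greedy nearest-centroid matching are all invented here for illustration:

```python
import numpy as np
from scipy.cluster.vq import kmeans2

def track_clusters(spec, k=4, win=50):
    # spec: (time, freq) spectrogram. Cluster the frames in each window,
    # then greedily attach each new centroid to the track whose last
    # centroid is nearest (collisions are possible in this toy version),
    # yielding per-cluster centroid trajectories over time.
    tracks = None
    for start in range(0, spec.shape[0] - win + 1, win):
        cent, _ = kmeans2(spec[start:start + win], k, minit="++", seed=0)
        if tracks is None:
            tracks = [[c] for c in cent]
        else:
            prev = np.array([t[-1] for t in tracks])
            for c in cent:
                j = np.argmin(np.linalg.norm(prev - c, axis=1))
                tracks[j].append(c)
    return tracks
```

The resulting trajectories could then be summarized (e.g., centroid velocity and spread) as the secondary features the excerpt mentions.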


Journal:

Volume 6, Issue

Pages -

Publication year: 2016